Search results for "motion estimation"
Showing 10 of 56 documents
Rotation estimation and vanishing point extraction by omnidirectional vision in urban environment
2012
Rotation estimation is a fundamental step for various robotic applications such as automatic control of ground/aerial vehicles, motion estimation and 3D reconstruction. However, it is now well established that traditional navigation equipment, such as global positioning systems (GPS) or inertial measurement units (IMUs), suffers from several disadvantages. Hence, some vision-based works have been proposed recently. While interesting results can be obtained, the existing methods have non-negligible limitations, such as difficult feature matching (e.g. repeated textures, blur or illumination changes) and a high computational cost (e.g. analysis in the frequency domai…
Static and Dynamic Objects Analysis as a 3D Vector Field
2017
In the context of scene modelling, understanding, and landmark-based robot navigation, knowledge of the static scene parts and of the moving objects with their motion behaviours plays a vital role. We present a complete framework to detect and extract moving objects in order to reconstruct a high-quality static map. For a moving 3D camera setup, we propose a novel 3D Flow Field Analysis approach which accurately detects the moving objects using only 3D point cloud information. Further, we introduce a Sparse Flow Clustering approach to effectively and robustly group the motion flow vectors. Experiments show that the proposed Flow Field Analysis algorithm and Sparse Flow Clusterin…
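As a rough illustration of the grouping idea only (not the paper's Sparse Flow Clustering algorithm), the sketch below clusters hypothetical 3D flow vectors with DBSCAN on a joint position/flow feature, so that nearby points moving coherently fall into the same group; all arrays, weights and thresholds are invented for the example.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical sparse 3D flow field: each row pairs a 3D point with its motion vector.
starts = np.random.rand(500, 3) * 10.0      # 3D points in the scene
flows = np.random.randn(500, 3) * 0.05      # their displacement between frames

# Joint feature: position plus (weighted) flow, so position-coherent and
# motion-coherent vectors end up in the same cluster.
flow_weight = 20.0                          # assumed trade-off, tune per scene scale
features = np.hstack([starts, flow_weight * flows])

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(features)

# Label -1 marks vectors joining no cluster (treated here as static/noise);
# every other label is a candidate moving object.
moving_objects = [np.where(labels == k)[0] for k in set(labels) if k != -1]
print(f"{len(moving_objects)} candidate moving objects")
```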
Homography based egomotion estimation with a common direction
2017
In this paper, we explore the different minimal solutions for egomotion estimation of a camera based on a homography between calibrated images, given knowledge of the gravity vector. These solutions depend on the prior knowledge about the reference plane used by the homography. We then demonstrate that the number of matched points can vary from two to three and that a direct closed-form solution or a Gröbner basis based solution can be derived according to this plane. Many experimental results on synthetic and real sequences in indoor and outdoor environments show the efficiency and the robustness of our approach compared to standard methods.
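For comparison with the minimal solvers described above, a minimal sketch of the standard route is shown below: fit a homography with OpenCV, decompose it, and use an assumed gravity direction to pick among the candidate decompositions. The intrinsics, point sets and gravity vector are placeholders, and this is not the paper's two/three-point solution.

```python
import numpy as np
import cv2

# Hypothetical calibrated intrinsics and matched pixel coordinates (illustration only).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
pts1 = np.random.rand(20, 2) * [640, 480]   # points in image 1
pts2 = pts1 + np.random.randn(20, 2)        # roughly corresponding points in image 2

# Standard (non-minimal) pipeline: fit a homography with RANSAC, then decompose
# it into candidate rotations / translations / plane normals.
H, inliers = cv2.findHomography(pts1.astype(np.float32),
                                pts2.astype(np.float32),
                                cv2.RANSAC, 3.0)
n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)

# An assumed gravity direction (here the y-axis) can prune decompositions whose
# plane normal disagrees with the expected reference plane.
gravity = np.array([0.0, 1.0, 0.0])
best = max(range(n_solutions),
           key=lambda i: abs(float(normals[i].ravel() @ gravity)))
R, t = rotations[best], translations[best]
print("rotation:\n", R, "\ntranslation (up to scale):\n", t.ravel())
```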
Real time UAV altitude, attitude and motion estimation from hybrid stereovision
2012
Knowledge of altitude, attitude and motion is essential for an Unmanned Aerial Vehicle during critical maneuvers such as landing and take-off. In this paper, we present a hybrid stereoscopic rig composed of a fisheye and a perspective camera for vision-based navigation. In contrast to classical stereoscopic systems based on feature matching, we propose methods which avoid matching between hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated by decoupling: the fisheye camera contributes to evaluating attitude, while the perspective camera contributes to estimating t…
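A minimal sketch of a plane-sweeping step under simplifying assumptions (two perspective views with known relative pose and a fixed ground-plane normal); the hybrid fisheye/perspective geometry used in the paper is not reproduced, and all parameters are placeholders.

```python
import numpy as np
import cv2

def sweep_altitude(img_ref, img_other, K, R, t, altitudes, n=np.array([0.0, 0.0, 1.0])):
    """Plane-sweep over candidate ground-plane distances (altitudes).

    img_ref, img_other: grayscale views; K: 3x3 intrinsics; (R, t): relative pose
    (t is a 3-vector); n: assumed plane normal in the reference view.
    """
    K_inv = np.linalg.inv(K)
    best_alt, best_score = None, np.inf
    for d in altitudes:
        # Plane-induced homography for the plane n.X = d seen from the reference view.
        H = K @ (R - np.outer(t, n) / d) @ K_inv
        warped = cv2.warpPerspective(img_other, H, (img_ref.shape[1], img_ref.shape[0]))
        # Photometric consistency: mean absolute difference after warping.
        score = np.mean(np.abs(img_ref.astype(np.float32) - warped.astype(np.float32)))
        if score < best_score:
            best_alt, best_score = d, score
    return best_alt, best_score   # altitude hypothesis with the best photoconsistency
```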
Adapted Approach for Omnidirectional Egomotion Estimation
2011
Egomotion estimation is based principally on the estimation of the optical flow in the image. Recent research has shown that the use of omnidirectional systems with large fields of view allows the limitations of planar-projection imagery to be overcome when addressing the problem of motion analysis. For omnidirectional images, the 2D motion is often estimated using methods developed for perspective images. This paper computes the 2D motion field with an adapted method that takes into account the distortions present in the omnidirectional image. This 2D motion field is then used as input to the egomotion estimation process, using a spherical representation of the motion equation. Expe…
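A loose sketch of the input stage only, under an assumed equidistant fisheye model (r = f·θ): dense 2D flow is computed with Farneback's algorithm as a stand-in for the paper's adapted, distortion-aware method, and the pixel motion is then lifted to the unit sphere.

```python
import numpy as np
import cv2

def flow_on_sphere(img_prev, img_next, f, cx, cy):
    """Dense 2D flow (Farneback) lifted to the viewing sphere under an assumed
    equidistant fisheye model r = f * theta. Inputs are grayscale images;
    (f, cx, cy) are hypothetical calibration values."""
    flow = cv2.calcOpticalFlowFarneback(img_prev, img_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = img_prev.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)

    def lift(u, v):
        # Back-project pixel (u, v) to a unit direction on the viewing sphere.
        dx, dy = u - cx, v - cy
        r = np.hypot(dx, dy)
        theta = r / f                       # equidistant model: r = f * theta
        phi = np.arctan2(dy, dx)
        return np.stack([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)], axis=-1)

    p0 = lift(xs, ys)                                   # spherical points at time t
    p1 = lift(xs + flow[..., 0], ys + flow[..., 1])     # spherical points at time t+1
    return p0, p1 - p0                                  # points and spherical motion field
```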
An optimal population code for global motion estimation in local direction-selective cells
2021
Nervous systems allocate computational resources to match stimulus statistics. However, the physical information that needs to be processed depends on the animal's own behavior. For example, visual motion patterns induced by self-motion provide essential information for navigation. How behavioral constraints affect neural processing is not known. Here we show that, at the population level, local direction-selective T4/T5 neurons in Drosophila represent optic flow fields generated by self-motion, reminiscent of a population code in retinal ganglion cells in vertebrates. Whereas in vertebrates four different cell types encode different optic flow fields, the four uniformly tuned T4/T5…
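The population read-out of global motion can be caricatured as matched filtering on the viewing sphere: each local flow measurement linearly constrains the animal's self-rotation, and pooling them all gives a least-squares estimate. The toy sketch below works under that assumption (purely rotational, far-field flow) and makes no claim about the actual T4/T5 circuitry.

```python
import numpy as np

# Sample viewing directions d_i on the unit sphere (a crude grid of "local sensors").
rng = np.random.default_rng(0)
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)

# Ground-truth self-rotation; purely rotational flow on the sphere is u_i = d_i x omega.
omega_true = np.array([0.02, -0.01, 0.03])
u = np.cross(d, omega_true) + 0.001 * rng.normal(size=d.shape)  # noisy local measurements

def skew(v):
    # Cross-product matrix so that skew(v) @ w == v x w.
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Pool all local measurements: stack u_i = [d_i]_x omega and solve for omega
# in the least-squares sense (a simple "population read-out").
A = np.vstack([skew(di) for di in d])        # (3N, 3)
b = u.reshape(-1)                            # (3N,)
omega_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("true:", omega_true, "estimated:", omega_est)
```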
High Quality Reconstruction of Dynamic Objects using 2D-3D Camera Fusion
2017
In this paper, we propose a complete pipeline for high quality reconstruction of dynamic objects using a 2D-3D camera setup attached to a moving vehicle. Starting from the segmented motion trajectories of individual objects, we compute their precise motion parameters, register multiple sparse point clouds to increase the density, and develop a smooth and textured surface from the dense (but scattered) point cloud. The success of our method relies on the proposed optimization framework for accurate motion estimation between two sparse point clouds. Our formulation for fusing closest-point and consensus-based motion estimations, respectively in the absence and pres…
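A bare-bones point-to-point ICP between two sparse clouds, shown only as a stand-in for the closest-point term of the optimization described above; the consensus-based term and the 2D-3D fusion are not reproduced, and the KD-tree/Kabsch choices are the sketch's own.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(P, Q):
    """Best rigid transform (R, t) mapping points P onto corresponding points Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=30):
    """Point-to-point ICP: repeatedly match each source point to its nearest
    target point and refit a rigid motion. Returns the accumulated (R, t)."""
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                 # closest-point correspondences
        R, t = rigid_fit(src, target[idx])
        src = src @ R.T + t                      # apply the incremental motion
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```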
Kinematic features of movement tunes perception and action coupling
2005
How do we extrapolate the final position of a hand trajectory that suddenly vanishes behind a wall? Studies showing maintenance of cortical activity after objects in motion disappear suggest that an internal model of action may be recalled to reconstruct the missing part of the trajectory. Although supported by neurophysiological and brain imaging studies, behavioural evidence for this hypothesis is sparse. Further, in humans, it is unknown whether the recall of an internal model of action during motion observation can be tuned by kinematic features of movement. Here, we propose a novel experiment to address this question. Each stimulus consisted of a dot moving either upwards or downwards, and correspond…
Background subtraction for aerial surveillance conditions
2014
The first step in a surveillance system is to create a representation of the environment. Background subtraction is a widely used algorithm to define the part of an image that remains stationary most of the time in a video. In surveillance tasks, this model helps to recognize outlier objects in an area under monitoring. Setting up a background model on moving platforms (intelligent cars, UAVs, etc.) is a challenging task due to camera motion when images are acquired. In this paper, we propose a method that copes with the instabilities of aerial images by fusing spatial and temporal information about image motion. We use frame difference as a first approximation, then the age of pixels is …
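A simplified reading of the idea (frame differencing plus a per-pixel age), not the paper's exact update rule: pixels that stay unchanged accumulate age and are eventually blended into the background, while changed pixels reset their age and are flagged as foreground. The thresholds and blending factor below are invented.

```python
import numpy as np

class AgeBackgroundModel:
    """Frame-difference background model with a per-pixel age counter
    (a simplified sketch of the approach described in the abstract)."""

    def __init__(self, first_frame, diff_thresh=15, min_age=10, alpha=0.05):
        self.bg = first_frame.astype(np.float32)
        self.age = np.zeros(first_frame.shape, dtype=np.int32)
        self.diff_thresh = diff_thresh   # grey-level change considered "motion"
        self.min_age = min_age           # frames a pixel must stay stable to be absorbed
        self.alpha = alpha               # blending factor for stable pixels

    def update(self, frame):
        frame = frame.astype(np.float32)
        changed = np.abs(frame - self.bg) > self.diff_thresh    # first approximation: difference test
        self.age = np.where(changed, 0, self.age + 1)           # reset age where motion occurred
        stable = self.age >= self.min_age
        # Fold long-stable pixels into the background model.
        self.bg[stable] = (1 - self.alpha) * self.bg[stable] + self.alpha * frame[stable]
        return changed                                          # foreground mask for this frame
```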
A Fast GPU-Based Motion Estimation Algorithm for H.264/AVC
2012
H.264/AVC is the most recent predictive video compression standard, outperforming existing video coding standards at the cost of higher computational complexity. In recent years, heterogeneous computing has emerged as a cost-efficient solution for high-performance computing. In the literature, several algorithms have been proposed to accelerate video compression, but so far there have not been many solutions that deal with video codecs using heterogeneous systems. This paper proposes an algorithm to perform H.264/AVC inter prediction. The proposed algorithm performs the motion estimation, both with full-pixel and sub-pixel accuracy, using CUDA to assist the CPU, obtaining remarkable time …
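A plain CPU reference for the kind of computation the paper moves to the GPU: exhaustive full-pixel block matching with a SAD criterion over a fixed search window. Sub-pixel refinement and the CUDA kernels themselves are omitted; the block size and search range are arbitrary, and the two frames are assumed to have the same shape.

```python
import numpy as np

def block_match(ref, cur, block=16, search=8):
    """Full-pixel motion estimation: for each block of the current frame,
    find the displacement into the reference frame that minimises SAD."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            blk = cur[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue   # candidate block falls outside the reference frame
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(blk - cand).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs   # per-block motion vectors (dy, dx)
```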